Functions of a Matrix
The motivation for studying matrix-valued functions stems from the differential equations describing linear systems.
$$\dot{x} = A x(t)$$

$$x(t) = e^{At} x(0)$$
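This closed-form solution can be checked numerically. The sketch below uses an illustrative $A$ and $x(0)$ (chosen here, not from the notes) and compares $e^{At}x(0)$ with a direct numerical integration of $\dot{x} = Ax$.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Illustrative system: x_dot = A x with a chosen A and initial state
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t_end = 1.5

# Integrate the ODE numerically...
sol = solve_ivp(lambda t, x: A @ x, (0.0, t_end), x0, rtol=1e-10, atol=1e-12)

# ...and compare with the closed-form solution x(t) = e^{At} x(0)
x_closed = expm(A * t_end) @ x0
print(np.allclose(x_closed, sol.y[:, -1], atol=1e-6))
```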
The power series representation of an analytic function is
$$f(s) = \sum_{i=0}^{\infty} \alpha_i s^i$$

where $s \in \mathbb{C}$.
The power series representation of a matrix-valued function is
$$f(A) = \sum_{i=0}^{\infty} \alpha_i A^i$$

where $A \in \mathbb{C}^{n \times n}$. The result is another matrix of the same size as $A$.
The exponential function is defined for matrices as
$$e^t = \sum_{i=0}^{\infty} \frac{t^i}{i!} \implies e^A = \sum_{i=0}^{\infty} \frac{A^i}{i!}$$
By the Cayley–Hamilton theorem, we can write $A^i$ as a linear combination of $I, A, A^2, \cdots, A^{n-1}$.
$$e^A = c_0 I + c_1 A + c_2 A^2 + \cdots + c_{n-1} A^{n-1}$$

where $c_i$ are scalars.
Remark: One can instead use the minimal polynomial of the matrix to express any power of $A$ in terms of $I, A, A^2, \cdots, A^{l-1}$, where $l$ is the degree of the minimal polynomial.

$$e^A = c_0 I + c_1 A + c_2 A^2 + \cdots + c_{l-1} A^{l-1}$$
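As a quick numerical sanity check (the matrix and the number of terms are arbitrary choices for illustration), a truncated power series already agrees closely with SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    """Approximate e^A by truncating the power series sum of A^i / i!."""
    acc = np.eye(A.shape[0])    # i = 0 term
    term = np.eye(A.shape[0])
    for i in range(1, terms + 1):
        term = term @ A / i     # builds A^i / i! incrementally
        acc = acc + term
    return acc

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.allclose(expm_series(A), expm(A)))  # True
```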
First Method
Let
$$f(s) = \sum_{i=0}^{\infty} \alpha_i s^i$$

$$f(A) = \sum_{i=0}^{\infty} \alpha_i A^i$$
Define $p(s)$ and $P(A)$ as follows:

$$p(s) = c_0 + c_1 s + c_2 s^2 + \cdots + c_{l-1} s^{l-1}$$

$$P(A) = c_0 I + c_1 A + c_2 A^2 + \cdots + c_{l-1} A^{l-1}$$
Then we have the equality
$$f(A) = P(A)$$

where $P(A)$ is a polynomial in $A$.
Case I: $A$ is diagonalizable
Suppose
$$m(s) = (s - \lambda_1)(s - \lambda_2) \cdots (s - \lambda_{\sigma})$$

$l = \sigma$, $m_1 = m_2 = \cdots = m_{\sigma} = 1$

$$f(A) = P(A) = c_0 I + c_1 A + c_2 A^2 + \cdots + c_{l-1} A^{l-1}$$
Let $e_i$ be the eigenvector of $A$ corresponding to $\lambda_i$, so that

$$A e_i = \lambda_i e_i$$

Multiply both sides of $f(A) = P(A)$ by $e_i$ from the right:

$$\sum_{n=0}^{l-1} \alpha_n A^n e_i = \sum_{n=0}^{l-1} c_n A^n e_i$$

$$\sum_{n=0}^{l-1} \alpha_n \lambda_i^n e_i = \sum_{n=0}^{l-1} c_n \lambda_i^n e_i$$

Since $e_i \neq 0$, evaluating both sums gives

$$f(\lambda_i) = P(\lambda_i)$$

so the coefficients $c_0, c_1, \cdots, c_{l-1}$ are found by matching $f$ and $p$ at the eigenvalues.
Example: $A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$. Find $e^A$ and $\log(A)$.
Solution :
$$d(s) = (s - 3)(s - 1)$$

$\lambda_1 = 3$, $\lambda_2 = 1$

$$m(s) = (s - 3)(s - 1), \quad l = 2$$

$$p(s) = c_0 + c_1 s, \quad P(A) = c_0 I + c_1 A$$

$$Ae_1 = 3e_1, \quad Ae_2 = e_2$$
$$f(3) = p(3), \quad f(1) = p(1)$$

$$f(3) = e^3 = c_0 + 3c_1, \quad f(1) = e = c_0 + c_1$$

$$c_1 = \frac{e^3 - e}{2}, \quad c_0 = \frac{3e - e^3}{2}$$

$$e^A = \frac{3e - e^3}{2} I + \frac{e^3 - e}{2} A = \frac{3e - e^3}{2} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \frac{e^3 - e}{2} \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$

For $\log(A)$, the same equations with $f(s) = \log s$ give $\log 3 = c_0 + 3c_1$ and $\log 1 = 0 = c_0 + c_1$, so $c_1 = \frac{\log 3}{2}$, $c_0 = -\frac{\log 3}{2}$, and $\log(A) = \frac{\log 3}{2}(A - I)$.
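The coefficients obtained for this example can be verified numerically, using SciPy's `expm` as a reference (a sketch, not part of the original derivation):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
e = np.e

# Coefficients from f(3) = c0 + 3*c1 and f(1) = c0 + c1 with f(s) = e^s
c1 = (e**3 - e) / 2
c0 = (3 * e - e**3) / 2

# e^A should equal c0*I + c1*A
print(np.allclose(c0 * np.eye(2) + c1 * A, expm(A)))  # True
```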
Case II: $A$ is not diagonalizable
Consider the following example. Let $A \in \mathbb{R}^{3 \times 3}$ with minimal polynomial $m(s) = (s-\lambda_1)^2(s-\lambda_2)$.

$$J = \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{bmatrix}$$

$$l = \sum m_i = 2 + 1 = 3$$
$$f(A) = P(A) = c_0 I + c_1 A + c_2 A^2$$

$$f(\lambda_1) = P(\lambda_1), \quad f(\lambda_2) = P(\lambda_2)$$

These two equations are not enough to find $c_0, c_1, c_2$.
Consider the matrix $P$ that transforms $A$ into its Jordan canonical form $J$, i.e. $P^{-1}AP = J$, so that $PJ = AP$. $P$ is of the form $[e_1, f_1, e_2]$, where $e_1$ and $e_2$ are the eigenvectors of $A$ corresponding to $\lambda_1$ and $\lambda_2$ respectively, and $f_1$ is the generalized eigenvector of $A$ corresponding to $\lambda_1$.

$$P = \begin{bmatrix} \vdots & \vdots & \vdots \\ e_1 & f_1 & e_2 \\ \vdots & \vdots & \vdots \end{bmatrix}$$

$$\begin{bmatrix} \vdots & \vdots & \vdots \\ e_1 & f_1 & e_2 \\ \vdots & \vdots & \vdots \end{bmatrix} \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{bmatrix} = A \begin{bmatrix} \vdots & \vdots & \vdots \\ e_1 & f_1 & e_2 \\ \vdots & \vdots & \vdots \end{bmatrix}$$
$$\begin{aligned} Af_1 &= \lambda_1 f_1 + e_1 \\ A^2 f_1 &= \lambda_1 A f_1 + A e_1 = \lambda_1^2 f_1 + \lambda_1 e_1 + \lambda_1 e_1 = \lambda_1^2 f_1 + 2\lambda_1 e_1 \\ A^3 f_1 &= \lambda_1^2 A f_1 + 2\lambda_1 A e_1 = \lambda_1^3 f_1 + 3\lambda_1^2 e_1 \\ &\;\;\vdots \\ A^k f_1 &= \lambda_1^k f_1 + k \lambda_1^{k-1} e_1 \end{aligned}$$
Return to the equation $f(A) = P(A)$:

$$\sum_{i=0}^{l-1} \alpha_i A^i = \sum_{i=0}^{l-1} c_i A^i$$

Multiply both sides by $f_1$ from the right:

$$\sum_{i=0}^{l-1} \alpha_i A^i f_1 = \sum_{i=0}^{l-1} c_i A^i f_1$$

$$\sum_{i=0}^{l-1} \alpha_i \lambda_1^i f_1 + \sum_{i=0}^{l-1} \alpha_i i \lambda_1^{i-1} e_1 = \sum_{i=0}^{l-1} c_i \lambda_1^i f_1 + \sum_{i=0}^{l-1} c_i i \lambda_1^{i-1} e_1$$

$$f(\lambda_1) f_1 + f'(\lambda_1) e_1 = P(\lambda_1) f_1 + P'(\lambda_1) e_1$$
$$f(\lambda_1) = P(\lambda_1)$$

$$f'(\lambda_1) = P'(\lambda_1)$$

The derivative condition is the additional equation we need to find $c_0, c_1, c_2$.
General Case
Let $m(s) = (s-\lambda_1)^{m_1}(s-\lambda_2)^{m_2} \cdots (s-\lambda_{\sigma})^{m_{\sigma}}$. We have the following set of equations:

$$f^{(t)}(\lambda_i) = P^{(t)}(\lambda_i)$$

where $t = 0, 1, 2, \cdots, m_i - 1$ and $i = 1, 2, \cdots, \sigma$. In total this gives $\sum_{i=1}^{\sigma} m_i = l$ equations.
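These conditions form an $l \times l$ linear system for $c_0, \cdots, c_{l-1}$. A sketch of setting it up in general (the function and variable names here are illustrative, not from the notes):

```python
import numpy as np
from math import factorial

def matrix_function_coeffs(eig_mults, f_deriv):
    """Solve p^(t)(lambda_i) = f^(t)(lambda_i) for coefficients c_0..c_{l-1}.

    eig_mults: list of (lambda_i, m_i) pairs from the minimal polynomial.
    f_deriv(lam, t): t-th derivative of f evaluated at lam.
    """
    l = sum(m for _, m in eig_mults)
    rows, rhs = [], []
    for lam, m in eig_mults:
        for t in range(m):
            # t-th derivative of s^i at lam: i!/(i-t)! * lam^(i-t) for i >= t
            rows.append([factorial(i) // factorial(i - t) * lam**(i - t)
                         if i >= t else 0.0 for i in range(l)])
            rhs.append(f_deriv(lam, t))
    return np.linalg.solve(np.array(rows, dtype=float), np.array(rhs))

# Reproduces the sin(pi*A) example below: eigenvalue 0 with m=3, 1 with m=2.
# d^t/ds^t sin(pi*s) = pi^t * sin(pi*s + t*pi/2)
f_deriv = lambda lam, t: np.pi**t * np.sin(np.pi * lam + t * np.pi / 2)
c = matrix_function_coeffs([(0.0, 3), (1.0, 2)], f_deriv)
print(np.allclose(c, [0, np.pi, 0, -2 * np.pi, np.pi]))  # True
```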
Example: $A = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$. Find $\sin{(\pi A)}$.
Solution :
$m(s) = s^3(s-1)^2$, so the degree of the minimal polynomial is $l = 5$.

$$p(s) = c_0 + c_1 s + c_2 s^2 + c_3 s^3 + c_4 s^4$$

$$p(A) = c_0 I + c_1 A + c_2 A^2 + c_3 A^3 + c_4 A^4$$

We need five equations to find $c_0, c_1, c_2, c_3, c_4$.
Starting with
$$\begin{aligned} f(s) &= \sin{(\pi s)} \\ f'(s) &= \pi \cos{(\pi s)} \\ f''(s) &= -\pi^2 \sin{(\pi s)} \end{aligned}$$
Then
$$\begin{aligned} p(s) &= c_0 + c_1 s + c_2 s^2 + c_3 s^3 + c_4 s^4 \\ p'(s) &= c_1 + 2c_2 s + 3c_3 s^2 + 4c_4 s^3 \\ p''(s) &= 2c_2 + 6c_3 s + 12c_4 s^2 \end{aligned}$$
For the first eigenvalue $\lambda_1 = 0$, we have $m_1 = 3$:
$$\begin{aligned} f(\lambda_1) &= p(\lambda_1) \\ f'(\lambda_1) &= p'(\lambda_1) \\ f''(\lambda_1) &= p''(\lambda_1) \end{aligned}$$

$$\begin{aligned} f(0) &= p(0) \\ f'(0) &= p'(0) \\ f''(0) &= p''(0) \end{aligned}$$

$$\begin{aligned} \sin{(0)} &= 0 = c_0 \\ \pi \cos{(0)} &= \pi = c_1 \\ -\pi^2 \sin{(0)} &= 0 = 2c_2 \end{aligned}$$
For the second eigenvalue $\lambda_2 = 1$, we have $m_2 = 2$:

$$\begin{aligned} f(\lambda_2) &= p(\lambda_2) \\ f'(\lambda_2) &= p'(\lambda_2) \end{aligned}$$

$$\begin{aligned} \sin{(\pi)} &= 0 = c_0 + c_1 + c_2 + c_3 + c_4 \\ \pi \cos{(\pi)} &= -\pi = c_1 + 2c_2 + 3c_3 + 4c_4 \end{aligned}$$
In matrix form, the five equations are

$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} 0 \\ \pi \\ 0 \\ 0 \\ -\pi \end{bmatrix}$$
$$\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} 0 \\ \pi \\ 0 \\ -2\pi \\ \pi \end{bmatrix}$$

$$p(s) = \pi s - 2\pi s^3 + \pi s^4$$

$$\sin{(\pi A)} = \pi A - 2\pi A^3 + \pi A^4$$
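The closed form can be checked against a truncated Taylor series of $\sin$ (a numerical sketch; the number of series terms is an arbitrary choice):

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1]], dtype=float)

def sinm_series(M, terms=25):
    """Approximate sin(M) by its Taylor series sum (-1)^k M^(2k+1)/(2k+1)!."""
    term = M.copy()     # k = 0 term
    acc = term.copy()
    for k in range(1, terms + 1):
        # next term: multiply by -M^2 / ((2k)(2k+1))
        term = -term @ M @ M / ((2 * k) * (2 * k + 1))
        acc = acc + term
    return acc

closed_form = (np.pi * A - 2 * np.pi * A @ A @ A
               + np.pi * np.linalg.matrix_power(A, 4))
print(np.allclose(sinm_series(np.pi * A), closed_form))  # True
```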
Remark: $f(A)$ does not exist when $f^{(t)}(\lambda_i)$ does not exist for some $t$ and $i$.
#EE501 - Linear Systems Theory at METU